7 research outputs found

    Fairness of Exposure in Rankings

    Full text link
    Rankings are ubiquitous in the online world today. As we have transitioned from finding books in libraries to ranking products, jobs, job applicants, opinions, and potential romantic partners, there is a substantial precedent that ranking systems have a responsibility not only to their users but also to the items being ranked. To address these often conflicting responsibilities, we propose a conceptual and computational framework that allows the formulation of fairness constraints on rankings in terms of exposure allocation. As part of this framework, we develop efficient algorithms for finding rankings that maximize the utility for the user while provably satisfying a specifiable notion of fairness. Since fairness goals can be application specific, we show how a broad range of fairness constraints can be implemented using our framework, including forms of demographic parity, disparate treatment, and disparate impact constraints. We illustrate the effect of these constraints by providing empirical results on two ranking problems.
    Comment: In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, UK, 201
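
    The framework in this paper treats ranking as an optimization over probabilistic rankings: maximize expected utility to the user subject to constraints on how exposure is allocated to items or groups. As a minimal sketch of that idea (not the authors' code; the position-bias model, group labels, and numbers below are assumed purely for illustration), the following Python snippet solves such a linear program for a tiny example with a demographic-parity-of-exposure constraint.

        import numpy as np
        from scipy.optimize import linprog

        u = np.array([0.9, 0.8, 0.7, 0.6])      # hypothetical item relevances
        group = np.array([0, 0, 1, 1])          # hypothetical group membership per item
        n = len(u)
        v = 1.0 / np.log2(np.arange(2, n + 2))  # assumed position bias (exposure) at each rank

        # Decision variable: P[i, j] = probability of placing item i at position j,
        # flattened row-major into a vector of length n*n.
        c = -(np.outer(u, v)).ravel()           # maximize expected utility = minimize its negation

        A_eq, b_eq = [], []
        for i in range(n):                      # each item is placed exactly once (row sums = 1)
            row = np.zeros(n * n)
            row[i * n:(i + 1) * n] = 1
            A_eq.append(row)
            b_eq.append(1)
        for j in range(n):                      # each position is filled exactly once (column sums = 1)
            col = np.zeros(n * n)
            col[j::n] = 1
            A_eq.append(col)
            b_eq.append(1)

        # Demographic parity of exposure: mean exposure of group 0 equals that of group 1.
        parity = np.zeros(n * n)
        for i in range(n):
            sign = 1.0 / (group == 0).sum() if group[i] == 0 else -1.0 / (group == 1).sum()
            parity[i * n:(i + 1) * n] = sign * v
        A_eq.append(parity)
        b_eq.append(0)

        res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1), method="highs")
        P = res.x.reshape(n, n)                 # doubly stochastic ranking matrix
        print(np.round(P, 2))

    The solution P is a doubly stochastic matrix; decomposing it into a lottery over concrete rankings (for example via a Birkhoff-von Neumann decomposition) realizes the prescribed exposures in expectation.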

    RecRec: Algorithmic Recourse for Recommender Systems

    Full text link
    Recommender systems play an essential role in the choices people make in domains such as entertainment, shopping, food, news, employment, and education. The machine learning models underlying these recommender systems are often enormously large and black-box in nature for users, content providers, and system developers alike. It is often crucial for all stakeholders to understand the model's rationale behind making certain predictions and recommendations. This is especially true for the content providers whose livelihoods depend on the recommender system. Drawing motivation from practitioners' needs, in this work we propose RecRec, a recourse framework for recommender systems targeted towards content providers. Algorithmic recourse in the recommendation setting is a set of actions that, if executed, would modify the recommendations (or ranking) of an item in the desired manner. A recourse suggests actions of the form: "if a feature changes from X to Y, then the ranking of that item for a set of users will change to Z." Furthermore, we demonstrate that RecRec is highly effective in generating valid, sparse, and actionable recourses through an empirical evaluation of recommender systems trained on three real-world datasets. To the best of our knowledge, this work is the first to conceptualize and empirically test a generalized framework for generating recourses for recommender systems.
    Comment: Accepted as a short paper at CIKM 202
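
    To make the notion of a recourse concrete, here is a brute-force illustration (this is not the RecRec algorithm; the dot-product scorer, embeddings, and candidate edits are all assumed for the example): it searches single-feature edits of a target item and reports the sparse change that most improves the item's average rank across users.

        import numpy as np

        rng = np.random.default_rng(0)
        users = rng.normal(size=(5, 3))          # hypothetical user embeddings
        items = rng.normal(size=(10, 3))         # hypothetical item feature vectors
        target = 7                               # the content provider's item

        def ranks_of(item_matrix, target_idx):
            """Rank (1 = best) of the target item for every user under a dot-product scorer."""
            scores = users @ item_matrix.T                     # (users x items) score matrix
            order = np.argsort(-scores, axis=1)                # item ids, best to worst, per user
            return np.argmax(order == target_idx, axis=1) + 1  # position of the target item per user

        baseline = ranks_of(items, target).mean()

        best = None
        for feature in range(items.shape[1]):                  # edit one feature at a time (sparse recourse)
            for value in np.linspace(-2, 2, 9):                # candidate new values for that feature
                edited = items.copy()
                edited[target, feature] = value
                new_rank = ranks_of(edited, target).mean()
                if best is None or new_rank < best[0]:
                    best = (new_rank, feature, value)

        print(f"mean rank {baseline:.1f} -> {best[0]:.1f} "
              f"by setting feature {best[1]} to {best[2]:.2f}")

    The printed line has exactly the "if a feature changes from X to Y, the ranking changes to Z" form of a recourse, here found by exhaustive search; the paper evaluates RecRec on generating such recourses so that they are valid, sparse, and actionable.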

    Fairness of Exposure for Ranking Systems

    No full text
    186 pages
    Ranking-based interfaces are ubiquitous in today's multi-sided online economies (such as online marketplaces, job search, property rental, and media streaming). In these systems, the items to be ranked are products, job candidates, creative content, or other entities that transfer economic benefit. It is widely recognized that the position of an item in the ranking has a crucial influence on its exposure, which directly translates into economic opportunity. Surprisingly, learning-to-rank (LTR) approaches typically do not consider their impact on the opportunity they provide to the items. Instead, most LTR algorithms focus solely on maximizing the utility of the rankings to the user issuing the query, while there is evidence that this does not necessarily lead to rankings that would be considered fair or desirable in many situations. This thesis proposes a conceptual and computational framework that allows the formulation of fairness constraints on rankings in terms of a merit-based exposure allocation. As part of this framework, we develop efficient learning-to-rank algorithms that maximize the utility for the user while provably satisfying a specifiable notion of fairness. Since fairness goals can be application-specific, we show that a broad range of fairness constraints can be implemented in this framework using its expressive power to link relevance, merit, exposure, and impact. Beyond the theoretical evidence in deriving the frameworks and algorithms, empirical results on simulated and real-world datasets verify the effectiveness of the approach on both individual- and group-fairness notions.
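
    The merit-based allocation referenced above can be stated compactly. As an illustrative formulation (notation assumed here, not quoted from the thesis), group exposure under a stochastic ranking policy and the proportionality-to-merit constraint can be written as:

        \begin{align}
          \mathrm{Exposure}(G_k \mid \pi) &= \frac{1}{|G_k|} \sum_{d \in G_k} \sum_{j=1}^{n} P_\pi\bigl(\mathrm{rank}(d) = j\bigr)\, v_j, \\
          \frac{\mathrm{Exposure}(G_1 \mid \pi)}{\mathrm{Merit}(G_1)} &= \frac{\mathrm{Exposure}(G_2 \mid \pi)}{\mathrm{Merit}(G_2)},
        \end{align}

    where $v_j$ is the examination probability (position bias) at rank $j$ and $\mathrm{Merit}(G_k)$ is the average relevance of the items in group $G_k$.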